

Search for: All records

Creators/Authors contains: "Savage, Saiph"


  1. AI has revolutionized the processing of various services, including the automatic facial verification of people. Automated approaches have demonstrated their speed and efficiency in verifying a large volume of faces, but they can face challenges when processing content from certain communities, including communities of people of color. This challenge has prompted the adoption of "human-in-the-loop" (HITL) approaches, where human workers collaborate with the AI to minimize errors. However, most HITL approaches do not consider workers' individual characteristics and backgrounds. This paper proposes a new approach, called Inclusive Portraits (IP), that connects with social theories around race to design a racially-aware human-in-the-loop system. Our experiments provide evidence that incorporating race into HITL systems for facial verification can significantly enhance performance, especially for services delivered to people of color. Our findings also highlight the importance of considering individual worker characteristics in the design of HITL systems, rather than treating workers as a homogeneous group. Our research has significant design implications for developing AI-enhanced services that are more inclusive and equitable. 
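The generic HITL pattern described above can be sketched in a few lines. This is a hypothetical illustration under assumed names (`hitl_verify`, `select_reviewers`), not the actual Inclusive Portraits design: the AI's verdict is trusted only above a confidence threshold, borderline cases fall back to a majority vote among human reviewers, and reviewer selection can take worker background into account rather than treating the worker pool as homogeneous.

```python
def select_reviewers(worker_pool, subject_group, k=3):
    """Hypothetical background-aware routing: prefer reviewers who share
    the image subject's community background, then fill with the rest."""
    matched = [w for w in worker_pool if w["background"] == subject_group]
    rest = [w for w in worker_pool if w["background"] != subject_group]
    return (matched + rest)[:k]

def hitl_verify(ai_confidence, ai_decision, reviewer_votes, threshold=0.9):
    """Trust the automated decision when the model is confident; otherwise
    fall back to a majority vote among the human reviewers."""
    if ai_confidence >= threshold:
        return ai_decision
    return sum(reviewer_votes) > len(reviewer_votes) / 2
```

The threshold and pool size are arbitrary placeholders; the point is only that routing and aggregation are separate, tunable steps.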
  2. Crowd work has increased significantly in recent years, particularly among women from Latin America. However, the specific needs and characteristics of this workforce have not been studied nearly enough. For this reason, we conducted a series of surveys, questionnaires, and design sessions directly with Latin American users of crowd-working platforms. Our aim was to create a system that empowers crowd workers with AI-enhanced tools for their day-to-day tasks. As a result, we created a customized platform, La Independiente, and two web plugins. This project is unique in that it leverages gender-perspective methodologies, AI-powered systems, and public policy analysis to design smart tools that are both professionally useful and culturally relevant. 
    Free, publicly-accessible full text available November 30, 2024
  3. Since 2018, Venezuelans have made up 75% of the total workforce of leading AI crowd work platforms, and it is very likely that other Latin American and Caribbean (LAC) countries will follow in the context of the post-COVID-19 economic recovery. While crowd work presents new opportunities for employment in regions of the world where local economies have stagnated, few initiatives have investigated the impact of such work in the Global South through the lens of feminist theory. To address this knowledge gap, we surveyed 55 LAC women on the crowd work platform Toloka to understand their personal goals, professional values, and the hardships they face in their work. Our results revealed that most participants shared a desire to hear the experiences of other women crowdworkers, mainly to help them navigate tasks, develop technical and soft skills, and manage their finances more efficiently. Additionally, 75% of the women reported that they completed crowd work tasks on top of caring for their families, while over 50% confirmed they needed to negotiate their family responsibilities to pursue crowd work in the first place. These findings demonstrated that a vital component lacking from the experiences of these women was a sense of connection with one another. Based on these observations, we propose a system designed to foster community among LAC women in crowd work to improve their personal and professional advancement. 
  4. Crowdsourcing has been used to produce impactful and large-scale datasets for Machine Learning and Artificial Intelligence (AI), such as ImageNet and SuperGLUE. Since the rise of crowdsourcing in the early 2000s, the AI community has been studying its computational, system design, and data-centric aspects from various angles. We welcome studies on developing and enhancing crowdworker-centric tools that offer task matching, requester assessment, and instruction validation, among other topics. We are also interested in exploring methods that leverage the integration of crowdworkers to improve the recognition and performance of machine learning models. Thus, we invite studies that focus on deploying active learning techniques, methods for joint learning from noisy data and from crowds, novel approaches for crowd-computer interaction, repetitive task automation, and role separation between humans and machines. Moreover, we invite work on designing and applying such techniques in various domains, including e-commerce and medicine. 
  5. How do you decide which papers to cite, how many, and from which particular sources? We reflect on and discuss the implications of these critical questions based on our experiences in the panel and workshops on the topic of citational justice that took place at CSCW, CLIHC, and India HCI in 2021. 
  6. The popularity of 3D-printed assistive technology has led to the emergence of new ecosystems of care, where multiple stakeholders (makers, clinicians, and recipients with disabilities) work toward creating new upper limb prosthetic devices. However, despite this growth, we currently know little about the differences between these care ecosystems. Medical regulations and the prevailing culture have greatly impacted how ecosystems are structured and how stakeholders work together, including whether clinicians and makers collaborate. To better understand these care ecosystems, we interviewed a range of stakeholders from multiple countries, including Brazil, Chile, Costa Rica, France, India, Mexico, and the U.S. Our broad analysis allowed us to uncover different working examples of how multiple stakeholders collaborate within these care ecosystems and the main challenges they face. Through our study, we found that ecosystems with multi-stakeholder collaborations exist (something prior work had not seen), and that these ecosystems showed increased success and impact. We also identified some of the key follow-up practices that reduce device abandonment. In particular, it is important for ecosystems to put in place follow-up practices that integrate formal agreements and compensation for participation (which need not be purely monetary). We found that these features helped ensure multi-stakeholder involvement and ecosystem sustainability. We finish the paper with socio-technical recommendations for creating vibrant care ecosystems that include multiple stakeholders in the production of 3D-printed assistive devices. 
  7. Crowdsourcing markets provide workers with a centralized place to find paid work. What may not be obvious at first glance is that, in addition to the work they do for pay, crowd workers also have to shoulder a variety of unpaid invisible labor in these markets, which ultimately reduces their hourly wages. Invisible labor includes finding good tasks, messaging requesters, or managing payments. However, we currently know little about how much time crowd workers actually spend on invisible labor or how much it costs them economically. To ensure a fair and equitable future for crowd work, we need to be certain that workers are being paid fairly for ALL of the work they do. In this paper, we conduct a field study to quantify the invisible labor in crowd work. We build a plugin to record the amount of time that 100 workers on Amazon Mechanical Turk dedicate to invisible labor while completing 40,903 tasks. If we ignore the time workers spent on invisible labor, workers' median hourly wage was $3.76. However, we estimated that crowd workers in our study spent 33% of their time daily on invisible labor, dropping their median hourly wage to $2.83. We found that invisible labor differentially impacts workers depending on their skill level and demographics. The invisible labor category that took the most time, and that was also the most common, revolved around workers having to manage their payments. The second most time-consuming invisible labor category involved hyper-vigilance, where workers vigilantly watched requesters' profiles for newly posted work or vigilantly searched for labor. We hope that through our paper the invisible labor in crowdsourcing becomes more visible, and that our results help reveal the larger implications of the continuing invisibility of labor in crowdsourcing. 
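The two medians reported in the abstract above can be reconciled with a quick calculation. One reading consistent with the numbers is that invisible labor adds roughly 33% of unpaid time on top of the paid time, so the effective wage is the paid-time wage divided by 1.33. A minimal sketch (the function name `effective_wage` is ours; the figures are from the study):

```python
def effective_wage(paid_wage, invisible_fraction):
    """Effective hourly wage when unpaid invisible labor adds
    `invisible_fraction` of extra time on top of each paid hour."""
    return paid_wage / (1 + invisible_fraction)

# Figures from the study: $3.76/hour ignoring invisible labor,
# with invisible labor adding roughly 33% more unpaid time.
print(round(effective_wage(3.76, 0.33), 2))  # 2.83
```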
  8. Crowdworkers depend on Amazon Mechanical Turk (AMT) as an important source of income, and it is left to workers to determine which tasks on AMT are fair and worth completing. While there are existing tools that assist workers in making these decisions, workers still spend significant amounts of time finding fair labor. Difficulties in this process may be a contributing factor in the imbalance between the median hourly earnings ($2.00/hour) and what the average requester pays ($11.00/hour). In this paper, we study how novices and experts select which tasks are worth doing. We argue that differences between the two populations likely lead to the wage imbalance. For this purpose, we first look at workers' comments in TurkOpticon (a tool where workers share their experiences with requesters on AMT). We use this study to begin to unravel what fair labor means for workers. In particular, we identify the characteristics of labor that workers consider to be of "good quality" and labor that is of "poor quality" (e.g., work that pays too little). Armed with this knowledge, we then conduct an experiment to study how experts and novices rate tasks of both good and poor quality. Through our research we uncover that experts and novices treat good-quality labor in the same way. However, there are significant differences in how experts and novices rate poor-quality labor, and in whether they believe poor-quality labor is worth doing. This points to several future directions, including machine learning models that support workers in detecting poor-quality labor, and paths for educating novice workers on how to make better labor decisions on AMT. 